Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
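As a rough illustration of the kind of cross-method comparison described above, the sketch below fits a generic scikit-learn classifier (a stand-in for the paper's student-success models, which are not reproduced here) and contrasts LIME and KernelSHAP attributions for a single instance using Spearman rank correlation; all features, labels, and data are placeholders.

```python
# Rough sketch (not the paper's pipeline): compare LIME and KernelSHAP attributions
# for one instance of a generic classifier using Spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

rng = np.random.default_rng(0)
X = rng.random((500, 6))                               # placeholder behavioral features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)              # placeholder pass/fail labels
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)
student = X[0]

# LIME: fit a local surrogate model around the instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_map = dict(lime_explainer.explain_instance(
    student, model.predict_proba, num_features=len(feature_names)).as_map()[1])
lime_scores = np.array([lime_map.get(i, 0.0) for i in range(len(feature_names))])

# KernelSHAP: additive attributions against a background sample.
shap_explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
sv = shap_explainer.shap_values(student)
# Older SHAP versions return a list of per-class arrays; newer ones a 2-D array.
shap_scores = np.array(sv[1]) if isinstance(sv, list) else np.array(sv)[:, 1]

rho, _ = spearmanr(np.abs(lime_scores), np.abs(shap_scores))
print(f"Spearman rank agreement between LIME and SHAP: {rho:.2f}")
```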
Time series is the most prevalent form of input data for educational prediction tasks. The vast majority of research using time series data focuses on hand-crafted features, designed by experts for predictive performance and interpretability. However, extracting these features is labor-intensive for humans and computers. In this paper, we propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy with raw time series clickstreams in comparison to hand-crafted features. Furthermore, we extend concept activation vectors for interpretability in raw time series models. We analyze these advances in the education domain, addressing the task of early student performance prediction for downstream targeted interventions and instructional support. Our experimental analysis on 23 MOOCs with millions of combined interactions over six behavioral dimensions shows that models designed with our approach can (i) beat state-of-the-art educational time series baselines with no feature extraction and (ii) provide interpretable insights for personalized interventions. Source code: https://github.com/epfl-ml4ed/ripple/.
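The following is a minimal, generic sketch of a concept activation vector (CAV), the interpretability primitive the paper extends to raw time series models; the activations, prediction head, and concept sets here are synthetic placeholders rather than the paper's GNN pipeline.

```python
# Minimal, generic concept activation vector (CAV) sketch with synthetic activations;
# the paper's raw-time-series GNN models are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(200, 64))    # activations for concept examples
random_acts = rng.normal(0.0, 1.0, size=(200, 64))      # activations for random examples

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])

# The CAV is the (unit) normal of a hyperplane separating concept from random activations.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Conceptual sensitivity of a prediction: directional derivative of the model output
# along the CAV, approximated with a finite difference on the activations.
def concept_sensitivity(activation, head_fn, cav, eps=1e-2):
    return (head_fn(activation + eps * cav) - head_fn(activation)) / eps

head_weights = rng.normal(size=64)                       # placeholder prediction head
head_fn = lambda a: float(a @ head_weights)
print("example concept sensitivity:", concept_sensitivity(random_acts[0], head_fn, cav))
```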
Natural language processing (NLP) is increasingly used to provide adaptivity in educational applications. However, recent research has highlighted a variety of biases in pre-trained language models. While existing studies investigate bias in different domains, they are limited in addressing fine-grained analysis of educational and multilingual corpora. In this work, we analyze bias across text and through multiple architectures on a corpus of 9,165 German peer reviews collected from university students over five years. Notably, our corpus includes labels such as helpfulness, quality, and critical-aspect ratings from the peer-review recipients, as well as demographic attributes. We conduct a Word Embedding Association Test (WEAT) analysis on (1) our collected corpus in connection with the clustered labels, (2) the most common pre-trained German language models (T5, BERT, and GPT-2) and GloVe embeddings, and (3) the language models after fine-tuning on our collected dataset. Contrary to our initial expectations, we found that our collected corpus does not reveal many biases in the co-occurrence analysis or in the GloVe embeddings. However, the pre-trained German language models exhibit substantial conceptual, racial, and gender bias, and the bias along the conceptual and racial axes changes significantly during fine-tuning on the peer-review data. With our research, we aim to contribute to the fourth UN Sustainable Development Goal (quality education) with a novel dataset, an understanding of biases in natural-language educational data, and the potential harms of not counteracting biases in language models for educational tasks.
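For readers unfamiliar with WEAT, the sketch below implements the standard effect-size computation from Caliskan et al. (2017) over arbitrary embedding vectors; the German word sets and the random "embeddings" are purely hypothetical stand-ins, not the paper's target and attribute lists.

```python
# Standard WEAT effect-size computation (Caliskan et al., 2017) over arbitrary embedding
# vectors; the word sets and random "embeddings" below are hypothetical placeholders.
import numpy as np

def _cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def _assoc(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus attribute set B.
    return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Difference of mean associations of the two target sets, normalized by the
    # standard deviation of associations over the pooled targets.
    sx = np.array([_assoc(x, A, B) for x in X])
    sy = np.array([_assoc(y, A, B) for y in Y])
    pooled = np.concatenate([sx, sy])
    return (sx.mean() - sy.mean()) / pooled.std(ddof=1)

rng = np.random.default_rng(0)
emb = lambda words: [rng.normal(size=50) for _ in words]          # placeholder embeddings
X, Y = emb(["er", "mann"]), emb(["sie", "frau"])                  # hypothetical targets
A, B = emb(["karriere", "arbeit"]), emb(["familie", "zuhause"])   # hypothetical attributes
print("WEAT effect size:", round(weat_effect_size(X, Y, A, B), 3))
```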
Neural networks are ubiquitous in applied machine learning for education. Their pervasive success in predictive performance comes alongside a severe weakness: the lack of explainability of their decisions, especially in human-centric domains. We implement five state-of-the-art methodologies for explaining black-box machine learning models (LIME, PermutationSHAP, KernelSHAP, DiCE, CEM) and examine the strengths of each approach on the downstream task of student performance prediction for five massive open online courses. Our experiments demonstrate that the families of explainers do not agree with each other on feature importance for the same Bidirectional LSTM models with the same representative set of students. We use Principal Component Analysis, Jensen-Shannon distance, and Spearman's rank-order correlation to quantitatively cross-examine explanations across methods and courses. Furthermore, we validate explainer performance across curriculum-based prerequisite relationships. Our results come to the concerning conclusion that the choice of explainer is an important decision and is in fact paramount to the interpretation of the predictive results, even more so than the course the model is trained on. Source code and models are released at http://github.com/epfl-ml4ed/evaluating-explainers.
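A minimal sketch of this style of quantitative cross-examination, assuming absolute feature-importance vectors are already available from each explainer (random placeholders below): normalize them into distributions for Jensen-Shannon distance and compare their rankings with Spearman correlation; the PCA step is omitted for brevity.

```python
# Sketch of the cross-explainer comparison, assuming absolute feature-importance
# vectors per explainer (random placeholders here); PCA is omitted for brevity.
import numpy as np
from itertools import combinations
from scipy.spatial.distance import jensenshannon
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
explainers = {name: np.abs(rng.normal(size=30))                   # placeholder importances
              for name in ["LIME", "PermutationSHAP", "KernelSHAP", "DiCE", "CEM"]}

for (name_a, imp_a), (name_b, imp_b) in combinations(explainers.items(), 2):
    # Normalize importances into distributions for the Jensen-Shannon distance.
    p, q = imp_a / imp_a.sum(), imp_b / imp_b.sum()
    js = jensenshannon(p, q)
    rho, _ = spearmanr(imp_a, imp_b)
    print(f"{name_a:>16} vs {name_b:<16}  JS={js:.3f}  Spearman={rho:.3f}")
```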
Transformer-based language models trained on large text corpora enjoy immense popularity in the natural language processing community and are commonly used as a starting point for downstream tasks. While these models are undeniably useful, it is challenging to quantify their performance beyond traditional accuracy metrics. In this paper, we compare BERT-based language models through snapshots of acquired knowledge at sequential stages of the training process. Structured relationships from the training corpus can be uncovered by querying a masked language model with probing tasks. We present a methodology to unveil a knowledge acquisition timeline by generating knowledge graph extracts from cloze "fill-in-the-blank" statements at various stages of RoBERTa's early training. We extend this analysis to a comparison of pretrained variants of BERT models (DistilBERT, BERT-base, RoBERTa). This work proposes a quantitative framework for comparing language models through knowledge graph extraction (GED, Graph2Vec) and showcases a part-of-speech analysis to identify the linguistic strengths of each model variant. Using these metrics, machine learning practitioners can compare models, diagnose the behavioral strengths and weaknesses of their models, and identify new targeted datasets to improve model performance.
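The snippet below sketches only the cloze-querying step with the Hugging Face `transformers` fill-mask pipeline; the templates are illustrative (not the paper's probing set), and the GED/Graph2Vec graph comparison is not reproduced.

```python
# Cloze-style probing of a masked language model with the Hugging Face fill-mask
# pipeline; these templates are illustrative, not the paper's probing set.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
templates = [
    "Paris is the capital of <mask>.",
    "The Eiffel Tower is located in <mask>.",
]

for cloze in templates:
    top = fill(cloze, top_k=1)[0]
    # Treat (cloze statement, predicted token) as raw material for a KG triple.
    print(f"{cloze!r} -> {top['token_str'].strip()!r} (p={top['score']:.3f})")
```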
Much recent work in task-oriented parsing has focused on finding a middle ground between flat slots and intents, which are inexpressive but easy to annotate, and powerful representations such as the lambda calculus, which are expressive but costly to annotate. This paper continues the exploration of task-oriented parsing by introducing a new dataset for parsing pizza and drink orders, whose semantics cannot be captured by flat slots and intents. We perform an extensive evaluation of deep-learning techniques for task-oriented parsing on this dataset, including different flavors of seq2seq systems and RNNGs. The dataset comes in two main versions, one in a recently introduced utterance-level hierarchical notation that we call TOP, and one whose targets are executable representations (EXR). We demonstrate empirically that training the parser to directly generate EXR notation not only solves the problem of entity resolution in one fell swoop and overcomes a number of expressive limitations of TOP notation, but also results in significantly greater parsing accuracy.
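Purely as a hypothetical illustration of the difference between the two target styles (the exact notation in the released dataset may differ), one pizza-and-drink utterance might be annotated roughly as follows.

```python
# Purely hypothetical illustration of the two target styles; the exact notation in the
# released dataset may differ.
utterance = "i'd like a large pizza with pepperoni and a small sprite"

# TOP-style target: a hierarchical parse built over the utterance tokens.
top_target = ("(ORDER (PIZZAORDER (SIZE large ) pizza with (TOPPING pepperoni ) ) "
              "and (DRINKORDER (SIZE small ) (DRINKTYPE sprite ) ) )")

# EXR-style target: an executable representation with resolved catalog entities.
exr_target = ("(ORDER (PIZZAORDER (NUMBER 1 ) (SIZE LARGE ) (TOPPING PEPPERONI ) ) "
              "(DRINKORDER (NUMBER 1 ) (SIZE SMALL ) (DRINKTYPE SPRITE ) ) )")

print(utterance, top_target, exr_target, sep="\n")
```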
A variety of problems in econometrics and machine learning, including instrumental variable regression and Bellman residual minimization, can be expressed as satisfying a set of conditional moment restrictions (CMR). We derive a general game-theoretic strategy for satisfying CMR that scales to nonlinear problems, is amenable to gradient-based optimization, and is able to account for finite-sample uncertainty. We recover the approaches of Dikkala et al. and Dai et al. as special cases of our general framework, before detailing various extensions and discussing how to efficiently solve the game defined by the CMR.
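One common way to write this setup (generic notation, not necessarily the paper's) is the conditional moment restriction together with an equivalent adversarial reformulation, in which a critic function tests violations of the restriction.

```latex
% Conditional moment restriction and an adversarial (game) reformulation; generic
% notation, not necessarily the paper's.
\begin{align}
  \text{CMR:}\quad & \mathbb{E}\!\left[\psi(Z;\theta)\mid X\right] = 0 \quad \text{a.s.}\\
  \text{Game:}\quad & \min_{\theta}\;\max_{f\in\mathcal{F}}\;
    \mathbb{E}\!\left[\psi(Z;\theta)^{\top} f(X)\right]
    \;-\; \lambda\,\mathbb{E}\!\left[\lVert f(X)\rVert^{2}\right],
\end{align}
% where the critic f tests violations of the restriction and the quadratic penalty
% (as in Dikkala et al.) keeps the inner maximization well-posed at finite sample sizes.
```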
We consider imitation learning problems where the expert has access to a per-episode context that is hidden from the learner, both at demonstration time and at test time. While the learner might not be able to accurately reproduce expert behavior early on in an episode, by considering the entire history of states and actions they might eventually be able to identify the context and act as the expert would. We prove that on-policy imitation learning algorithms (with or without access to a queryable expert) are better equipped than off-policy methods to handle these sorts of asymptotically realizable problems, and are able to avoid the latching behavior (naive repetition of past actions) that plagues the latter. We conduct experiments in a toy bandit domain which show that, in contrast to the uniformly good performance of on-policy approaches, whether off-policy methods are able to asymptotically match expert performance varies across problem instances. We demonstrate that on several continuous control tasks, on-policy approaches are able to use history to identify the context, whereas off-policy approaches actually perform worse when given access to history.
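The toy sketch below illustrates only the data-collection difference this result hinges on, with stub environment and expert: behavior cloning fits states visited by the expert, whereas an on-policy, DAgger-style learner rolls out its own policy and queries the expert for labels on the states it actually visits.

```python
# Toy sketch (stub environment and expert, not the paper's algorithms) of the
# data-collection difference: behavior cloning fits states the expert visits, while an
# on-policy, DAgger-style learner queries the expert on states from its own rollouts.
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state, context):
    return context                                     # toy expert acts on hidden context

def rollout(policy, horizon=5):
    states, actions, state = [], [], 0.0
    for _ in range(horizon):
        action = policy(state)
        states.append(state)
        actions.append(action)
        state = state + action + rng.normal(scale=0.1)
    return states, actions

context = rng.choice([-1.0, 1.0])                       # hidden per-episode context

# Off-policy (behavior cloning): dataset only contains expert-visited states.
bc_states, bc_actions = rollout(lambda s: expert_action(s, context))

# On-policy (DAgger-style): roll out the current learner, relabel with expert actions.
learner_policy = lambda s: 0.0                          # placeholder learner
dagger_states, _ = rollout(learner_policy)
dagger_actions = [expert_action(s, context) for s in dagger_states]

print("BC dataset head:", list(zip(bc_states, bc_actions))[:2])
print("DAgger dataset head:", list(zip(dagger_states, dagger_actions))[:2])
```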
Online imitation learning is the problem of how best to mimic expert behavior given access to the environment or an accurate simulator. Prior work has shown that in the infinite-sample regime, exact moment matching achieves value equivalence to the expert policy. However, in the finite-sample regime, even with no optimization error, empirical variance leads to a performance gap that scales as $H^2/N$ for behavioral cloning and $H/\sqrt{N}$ for online moment matching, where $H$ is the horizon and $N$ is the size of the expert dataset. We introduce the technique of replay estimation to reduce this empirical variance: by repeatedly executing cached expert actions in a stochastic simulator, we compute a smoother estimate of the expert visitation distribution to match. In the presence of general function approximation, we prove a meta theorem reducing the performance gap of our approach to the parameter estimation error for offline classification (i.e., learning the expert policy). In the tabular setting or with linear function approximation, our meta theorem shows that the performance gap incurred by our approach achieves the optimal $\widetilde{O}\left(\min\left({H^{3/2}}/{N}, {H}/{\sqrt{N}}\right)\right)$ dependency, under significantly weaker assumptions compared to prior work. We implement multiple instantiations of our approach on several continuous control tasks and find that we are able to significantly improve policy performance across a variety of dataset sizes.
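As a toy illustration of the replay-estimation idea (the simulator, demonstrations, and replay count are all stubs, not the paper's method), cached expert action sequences are re-executed in a stochastic simulator to collect many more visitation samples than the original demonstrations provide, smoothing the estimated expert state distribution.

```python
# Toy illustration of replay estimation (all stubs): cached expert action sequences are
# re-executed in a stochastic simulator to collect many more visitation samples than the
# demonstrations alone provide, smoothing the estimated expert state distribution.
import numpy as np

rng = np.random.default_rng(0)

def simulate(action_sequence):
    # Stochastic simulator stub: the state drifts with the actions plus noise.
    state, visited = 0.0, []
    for a in action_sequence:
        state = state + a + rng.normal(scale=0.5)
        visited.append(state)
    return visited

cached_demos = [rng.choice([-1.0, 1.0], size=10) for _ in range(5)]  # expert action sequences

# Naive estimate: one pass per demonstration.
naive_states = [s for seq in cached_demos for s in simulate(seq)]

# Replay estimate: replay every cached action sequence many times in the simulator.
replay_states = [s for seq in cached_demos for _ in range(50) for s in simulate(seq)]

print("naive sample size:", len(naive_states), "| replay sample size:", len(replay_states))
```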
In this work, we propose a graph convolutional neural network (GCN) based scheduling algorithm for ad hoc networks. In particular, we consider a generalized interference model called the $k$-tolerant conflict graph model and design an efficient approximation to the well-known Max-Weight scheduling algorithm. A notable feature of this work is that the proposed method does not require a labeled dataset (which is NP-hard to compute) to train the neural network. Instead, we design a loss function that leverages existing greedy approaches and train the GCN to improve upon the performance of the greedy approach. Our extensive numerical experiments illustrate that, using our GCN approach, we can significantly (4-20 percent) improve the performance of conventional greedy approaches.
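The sketch below is a toy stand-in, not the paper's architecture or loss: a single hand-rolled GCN layer scores nodes of a small conflict graph, trained to maximize scheduled weight while softly penalizing ordinary (non-tolerant) conflicts; the greedy-based loss term and the $k$-tolerant interference model are not reproduced.

```python
# Toy stand-in (not the paper's architecture or loss): one hand-rolled GCN layer scores
# nodes of a small conflict graph; training maximizes scheduled weight while softly
# penalizing ordinary conflicts. The greedy-based loss term and the k-tolerant
# interference model from the paper are not reproduced.
import torch

torch.manual_seed(0)
n = 6
adj = torch.tensor([[0, 1, 0, 0, 1, 0],
                    [1, 0, 1, 0, 0, 0],
                    [0, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [1, 0, 0, 1, 0, 0],
                    [0, 0, 0, 1, 0, 0]], dtype=torch.float)   # toy conflict graph
weights = torch.rand(n)                                        # per-link queue weights

# Symmetrically normalized adjacency with self-loops (standard GCN propagation matrix).
a_hat = adj + torch.eye(n)
d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
prop = d_inv_sqrt @ a_hat @ d_inv_sqrt

W = torch.nn.Parameter(torch.randn(1, 8))
v = torch.nn.Parameter(torch.randn(8, 1))
opt = torch.optim.Adam([W, v], lr=1e-2)

for _ in range(200):
    h = torch.relu(prop @ weights.unsqueeze(1) @ W)            # one GCN layer
    scores = torch.sigmoid(h @ v).squeeze(1)                   # per-node inclusion scores
    conflict_penalty = (scores.unsqueeze(0) * scores.unsqueeze(1) * adj).sum() / 2
    loss = -(scores * weights).sum() + conflict_penalty
    opt.zero_grad(); loss.backward(); opt.step()

print("learned per-node scheduling scores:", scores.detach())
```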